We were tasked with running a Processing sketch that would utilise the OpenCV library.

The library lets developers create tools that enable computers to see more than meets the eye in their environment. For instance, it can detect specific moving objects, people, differences between images, and so on.

The tasks involved fiddling with OpenCV code, trying to come up with some interesting solutions and even more interesting use cases, especially in relation to our future project.




My Surface Book's camera does not work well with the processing.video library: the video only works at predefined resolutions, is cropped in many cases, the colours are skewed and tinted green, and the framerate is very poor.
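
As a point of reference, here is the kind of minimal diagnostic sketch I mean, using only the stock processing.video API (nothing Surface-specific); it prints the camera configurations the driver claims to support, which is where the "predefined resolutions" limitation shows up:

import processing.video.*;

void setup() {
  // Print every camera/resolution/fps combination the driver reports
  String[] cameras = Capture.list();
  if (cameras.length == 0) {
    println("No cameras found by processing.video");
  } else {
    for (int i = 0; i < cameras.length; i++) {
      println(i + ": " + cameras[i]);
    }
  }
  exit();
}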

Implementing a workaround was the main challenge, and one I have not yet been able to overcome technically. Unfortunately, the discussion boards do not provide a straight answer, and although I attempted a few workarounds, none of them worked in relation to OpenCV.

I was advised that the eerie performance was due to codec-related issues; however, I stand by my belief that the processing.video library simply does not provide a suitable solution for the Surface's software. The codecs on my computer run anything, and besides, video encoding software such as Premiere ships its own codecs, as does playback software such as VLC.



I was successfully able to run the OpenCV code and get some basic functionality up and running. The code did not quite produce the right output at first, which prompted me to explore potential solutions; eventually I found the key: using an appropriate resolution (Full HD, in my case, if I recall correctly).

I also managed to find a workaround that implements a proper video feed, functioning as a substitute for the processing.video library.



I wasn't able to get OpenCV working entirely properly, as it glitches substantially: the colors are tinted green, the framerate is so low the experience feels like a slideshow, and the OpenCV features are limited, so the library as a whole is affected. In total, the library seems unusable on Surface devices.

Unfortunately, I was not able to get a Mac running at the station I was sitting at; there are some problems with running them in the labs. However, I would still prefer to have OpenCV running on my PC, as I would be able to research more effectively and efficiently at my own pace, in my own time, and on my own hardware.

I will make a few more attempts at merging the two pieces of code, or perhaps at fiddling with the codecs that processing.video relies on, although so far my attempts have been futile.


//Processing Code
// - Super Fast Blur v1.1 by Mario Klingemann 
// - BlobDetection library

import processing.video.*;
import blobDetection.*;
import de.humatic.dsj.*;
import java.awt.image.BufferedImage;

DCapture cap;
Capture cam;
BlobDetection theBlobDetection;
PImage img;
boolean newFrame=false;

// ==================================================
// setup()
// ==================================================
void setup()
{
  // Size of applet
  size(1920, 1080);
  background(0);
  cap = new DCapture();
  // Capture
  //cam = new Capture(this, 1920, 1080, 30);
  // Comment the following line if you use Processing 1.5
  //cam.start();
        
  // BlobDetection
  // img which will be sent to detection (a smaller copy of the cam frame);
  img = new PImage(80,60); 
  theBlobDetection = new BlobDetection(img.width, img.height);
  theBlobDetection.setPosDiscrimination(true);
  theBlobDetection.setThreshold(0.2f); // will detect bright areas whose luminosity > 0.2f;
}

// ==================================================
// captureEvent()
// ==================================================
void captureEvent(Capture cam)
{
  cam.read();
  newFrame = true;
}

// ==================================================
// draw()
// ==================================================
void draw()
{
  // Grab and display the current frame from the dsj workaround
  PImage frame = cap.updateImage();
  image(frame, 0, 0, cap.width, cap.height);
  // Shrink the frame into img, blur it, and hand it to the blob detector.
  // (The original example did this off the processing.video Capture in a
  // newFrame branch; cam is never started on my machine, so that branch
  // would never run.)
  img.copy(frame, 0, 0, frame.width, frame.height, 
      0, 0, img.width, img.height);
  fastblur(img, 2);
  theBlobDetection.computeBlobs(img.pixels);
  drawBlobsAndEdges(true, true);
}

// ==================================================
// drawBlobsAndEdges()
// ==================================================
void drawBlobsAndEdges(boolean drawBlobs, boolean drawEdges)
{
  noFill();
  Blob b;
  EdgeVertex eA,eB;
  for (int n=0 ; n<theBlobDetection.getBlobNb() ; n++)
  {
    b = theBlobDetection.getBlob(n);
    if (b==null) continue;
    // Edges
    if (drawEdges)
    {
      strokeWeight(3);
      stroke(0, 255, 0);
      for (int m=0 ; m<b.getEdgeNb() ; m++)
      {
        eA = b.getEdgeVertexA(m);
        eB = b.getEdgeVertexB(m);
        if (eA!=null && eB!=null)
          line(eA.x*width, eA.y*height, eB.x*width, eB.y*height);
      }
    }
    // Blobs
    if (drawBlobs)
    {
      strokeWeight(1);
      stroke(255, 0, 0);
      rect(b.xMin*width, b.yMin*height, b.w*width, b.h*height);
    }
  }
}

// ==================================================
// fastblur() - Super Fast Blur v1.1 by Mario Klingemann
// ==================================================
void fastblur(PImage img,int radius)
{
 if (radius<1){
    return;
  }
  int w=img.width;
  int h=img.height;
  int wm=w-1;
  int hm=h-1;
  int wh=w*h;
  int div=radius+radius+1;
  int r[]=new int[wh];
  int g[]=new int[wh];
  int b[]=new int[wh];
  int rsum,gsum,bsum,x,y,i,p,p1,p2,yp,yi,yw;
  int vmin[] = new int[max(w,h)];
  int vmax[] = new int[max(w,h)];
  int[] pix=img.pixels;
  int dv[]=new int[256*div];
  for (i=0;i<256*div;i++){
    dv[i]=(i/div);
  }

  yw=yi=0;

  // Horizontal blur pass
  for (y=0; y<h; y++){
    rsum=gsum=bsum=0;
    for (i=-radius; i<=radius; i++){
      p = pix[yi + min(wm, max(i, 0))];
      rsum += (p & 0xff0000)>>16;
      gsum += (p & 0x00ff00)>>8;
      bsum +=  p & 0x0000ff;
    }
    for (x=0; x<w; x++){
      r[yi]=dv[rsum];
      g[yi]=dv[gsum];
      b[yi]=dv[bsum];
      if (y==0){
        vmin[x]=min(x+radius+1, wm);
        vmax[x]=max(x-radius, 0);
      }
      p1 = pix[yw+vmin[x]];
      p2 = pix[yw+vmax[x]];
      rsum += ((p1 & 0xff0000)-(p2 & 0xff0000))>>16;
      gsum += ((p1 & 0x00ff00)-(p2 & 0x00ff00))>>8;
      bsum +=  (p1 & 0x0000ff)-(p2 & 0x0000ff);
      yi++;
    }
    yw+=w;
  }

  // Vertical blur pass, writing the result back into img.pixels
  for (x=0; x<w; x++){
    rsum=gsum=bsum=0;
    yp=-radius*w;
    for (i=-radius; i<=radius; i++){
      yi = max(0, yp)+x;
      rsum += r[yi];
      gsum += g[yi];
      bsum += b[yi];
      yp += w;
    }
    yi=x;
    for (y=0; y<h; y++){
      pix[yi] = 0xff000000 | (dv[rsum]<<16) | (dv[gsum]<<8) | dv[bsum];
      if (x==0){
        vmin[y]=min(y+radius+1, hm)*w;
        vmax[y]=max(y-radius, 0)*w;
      }
      p1 = x+vmin[y];
      p2 = x+vmax[y];
      rsum += r[p1]-r[p2];
      gsum += g[p1]-g[p2];
      bsum += b[p1]-b[p2];
      yi += w;
    }
  }
}


  // DCapture: the dsj-based capture workaround (adapted from the Processing forum)
import de.humatic.dsj.*;
import java.awt.image.BufferedImage;
 
class DCapture implements java.beans.PropertyChangeListener {
 
  private DSCapture capture;
  public int width, height;
 
  DCapture() {
    DSFilterInfo[][] dsi = DSCapture.queryDevices();
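    // As far as I understand dsj, dsi[0] holds the video capture devices
    // (dsi[1] the audio ones), so [0][0] picks the first camera found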
    capture = new DSCapture(DSFiltergraph.DD7, dsi[0][0], false, 
    DSFilterInfo.doNotRender(), this);
    width = capture.getDisplaySize().width;
    height = capture.getDisplaySize().height;
  }
 
  public PImage updateImage() {
    PImage img = createImage(width, height, RGB);
    BufferedImage bimg = capture.getImage();
    bimg.getRGB(0, 0, img.width, img.height, img.pixels, 0, img.width);
    img.updatePixels();
    return img;
  }
 
  public void propertyChange(java.beans.PropertyChangeEvent e) {
    switch (DSJUtils.getEventType(e)) {
    }
  }
}
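
For anyone trying to reproduce this, the class above can be smoke-tested on its own with a minimal driver sketch (this part is mine, not from the forum post; the class has to sit in the same sketch):

DCapture cap;

void setup() {
  size(640, 480);
  cap = new DCapture();  // opens the first DirectShow camera via dsj
}

void draw() {
  // Scale whatever resolution the camera delivers into the window
  image(cap.updateImage(), 0, 0, width, height);
}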


So, to start off, I began looking into OpenCV. However, it seemed to be malfunctioning, unfortunately. After consulting my tutors, we came to the conclusion that Surface cameras do not work properly with OpenCV and that the issue may be unresolvable. But I reckon anything can be resolved, so I looked for a solution and found a potential one that used a different technique for capturing video.

The solution was found here: [external link].

And according to the last post, there may be a solution here: [external link].

The first code didn't work, as I received an error with a -9 displayed. But it didn't matter, as I was more interested in the second code, which actually used my camera. The showcased solution was a bit counterintuitive, so I messed up my classes at first. I also accidentally introduced a 32-bit version of the library, which does not work with the 64-bit version of my Windows. Once I resolved that, I checked whether the video works.
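
For what it's worth, checking which architecture the JVM reports is a one-liner from any sketch; this is plain Java, so nothing library-specific (sun.arch.data.model is exposed by most desktop JVMs, though not guaranteed everywhere):

// Prints e.g. "amd64" on a 64-bit JVM or "x86" on a 32-bit one
println(System.getProperty("os.arch"));
// "64" or "32" on JVMs that expose this property
println(System.getProperty("sun.arch.data.model"));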

It does! It was cropped due to the resolution, but the colours were now natural and so was the framerate. Improvement!

I shifted the resolution from 640×480 to 1920×1080, the maximum offered by my camera. And bingo! Hello world!

Although my camera froze up eventually.


Well, it is my firm belief that computer vision is what drives our contemporary world. Whether it is us riding over the Harbour Bridge, with cameras photographing our licence plate and sending the bill to our home, or us just walking around somewhere. Orwell is strong here, although Australia is probably far from being a dystopian reality.

For the base of our future project, we could potentially utilise OpenCV as a substitute for a range of sensors. After all, most of our devices are already equipped with cameras. My laptop has a camera. Your laptop has a camera. My phone has a camera. Once you chuck in software that can analyze what the camera sees, you open up a wide range of possibilities, especially when you consider how capable these sensors have become.

Detecting movement is one use case. Detecting how far away a person is, another. The only problem is that we would have to rely on a visible device, which could ruin the potential for immersion.
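
As a rough illustration of the proximity idea: assuming the blob detection from the code above has already run for the current frame, the area of the largest blob can stand in for a crude "how close is the person" signal (bigger blob, closer person). An uncalibrated sketch, not a measurement:

// Crude proximity estimate: normalised area of the largest blob.
// Assumes theBlobDetection.computeBlobs() has run this frame.
float largestBlobArea() {
  float maxArea = 0;
  for (int n = 0; n < theBlobDetection.getBlobNb(); n++) {
    Blob b = theBlobDetection.getBlob(n);
    if (b != null) {
      float area = b.w * b.h; // w and h are normalised to 0..1
      if (area > maxArea) maxArea = area;
    }
  }
  return maxArea; // near 0 = far away or absent, towards 1 = filling the frame
}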


Hello World!


Not much, all in all. Considering that I could not run OpenCV on my own computer, though, I would probably have tracked down a Macintosh and tried running the library on one of those instead.

If not, perhaps I could have looked into employing some other techniques mentioned in the lecture. Maybe learning about sampling methods, filters, and a variety of tools to make sure the data that gets produced has a specific grade of precision and is useful for any ideas that may arise throughout this course.

I should probably also state that I will continue exploring ways to integrate the working video output I appropriated from the Processing forum with the OpenCV logic that did not work in my case.


Issues!


Quite a few. Firstly, interaction based on the user's movement. It would have to analyze the user's behavior rather than entice them to do anything, though (due to the Kinect being withdrawn from the market).

There are some other ideas to consider such as:

- Is there a debounce function in Processing? (see the first sketch after this list)
- How does printing PCBs work?
- Are there any better quality ultrasonic sensors out there?
- What is the best way to filter data/signals? For the purpose of the first assignment, oversampling is the idea (see the second sketch after this list).
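
On the debounce question: as far as I know, Processing has no built-in debounce, but a millis()-based one takes only a few lines. A minimal sketch of the idea; the 200 ms window is an arbitrary value of mine:

int lastTrigger = 0;
int debounceMs = 200; // arbitrary window; tune to taste

// Returns true at most once per debounce window,
// no matter how often the raw signal fires
boolean debounced() {
  if (millis() - lastTrigger > debounceMs) {
    lastTrigger = millis();
    return true;
  }
  return false;
}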
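
And on filtering: the oversampling idea could look something like this sketch, which takes several readings per loop and averages them, trading sampling rate for stability. readSensor() here is a hypothetical stand-in for whatever actually delivers the raw value:

// Oversample a noisy reading: average n samples per frame.
// readSensor() is hypothetical - substitute the real input source.
float oversample(int n) {
  float sum = 0;
  for (int i = 0; i < n; i++) {
    sum += readSensor();
  }
  return sum / n;
}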